Using Prior Probabilities and Density Estimation for Relational Classification
Author
Abstract
A Bayesian method for incorporating probabilistic background knowledge into inductive logic programming (ILP) is presented. Positive-only learning is extended to allow density estimation. The estimated densities and the defined prior are combined via Bayes' theorem to perform relational classification. An initial application of the technique is made to part-of-speech (POS) tagging. A novel use of Gibbs sampling for POS tagging is given.
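As a rough illustration of the idea in the abstract, the sketch below (Python, with an assumed kernel density estimator and made-up class names; it is not the paper's implementation) estimates a density p(x | class) from examples of each class and combines it with a prior p(class) via Bayes' theorem to classify:

import numpy as np
from scipy.stats import gaussian_kde

# Bayes' theorem with estimated densities and a given prior:
# p(class | x) is proportional to p(x | class) * p(class).

def fit_class_densities(examples_by_class):
    # Estimate p(x | class) with one kernel density estimator per class.
    return {c: gaussian_kde(np.asarray(xs).T) for c, xs in examples_by_class.items()}

def classify(x, densities, prior):
    # Score each class by likelihood times prior, then normalise.
    scores = {c: float(densities[c](np.asarray(x))[0]) * prior[c] for c in densities}
    total = sum(scores.values())
    posterior = {c: s / total for c, s in scores.items()}
    return max(posterior, key=posterior.get), posterior

# Hypothetical usage with two classes and a one-dimensional feature.
examples = {"noun": [[0.10], [0.15], [0.20]], "verb": [[0.80], [0.85], [0.90]]}
prior = {"noun": 0.6, "verb": 0.4}
densities = fit_class_densities(examples)
print(classify([0.17], densities, prior))

In the paper's setting the densities are estimated from positive-only relational examples and the prior encodes the probabilistic background knowledge; the sketch only shows the Bayes-theorem combination step.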
Similar Resources
Recursive Estimation of Proportions in Earth Observations
In Earth observation problems, we often know the probability density functions of the several classes of interest quite accurately, yet need to classify a set of observations with unknown class proportions. The observations are in n-dimensional space, where n is the number of spectral bands. Two recursive algorithms for classifying the observations and estimating the pr...
Using Density Estimation to Improve Text Categorization
This paper explores the use of a statistical technique known as density estimation to potentially improve the results of text categorization systems which label documents by computing similarities between documents and categories. In addition to potentially improving a system's overall accuracy, density estimation converts similarity scores to probabilities. These probabilities provide confidence...
Inference in Dynamic Probabilistic Relational Models
Stochastic processes that involve the creation and modification of objects and relations over time are widespread, but relatively poorly studied. For example, accurate fault diagnosis in factory assembly processes requires inferring the probabilities of erroneous assembly operations, but doing this efficiently and accurately is difficult. Modeled as dynamic Bayesian networks, these processes ha...
Variational problems in machine learning and their solution with finite elements
Many machine learning problems deal with the estimation of conditional probabilities p(y | x) from data (x1, y1), ..., (xn, yn). This includes classification, regression and density estimation. Given a prior for p(y | x), the maximum a-posteriori method estimates p(y | x) as the most likely probability given the data. This principle can be formulated rigorously using the Cameron-Martin theory...
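For context, the maximum a-posteriori principle mentioned in this excerpt can be written, in generic notation (not necessarily the formulation of the cited paper), as p̂ = argmax_p [ Σ_i log p(y_i | x_i) + log π(p) ], where π is the prior placed over candidate conditional probability models.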
Improved Estimation of Class Prior Probabilities through Unlabeled Data
Work in the classification literature has shown that in computing a classification function, one need not know the class membership of all observations in the training set; the unlabeled observations still provide information on the marginal distribution of the feature set, and can thus contribute to increased classification accuracy for future observations. The present paper will show that thi...
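The excerpt above argues that unlabeled observations carry information useful for estimating class priors. One common way to exploit this (a hedged sketch, not necessarily the method of the cited paper) is EM-style re-estimation, which repeatedly re-weights a classifier's posteriors on unlabeled data and averages them to update the prior:

import numpy as np

def reestimate_priors(posteriors, train_priors, n_iter=100):
    # posteriors: array of shape (n_unlabeled, n_classes) holding p(class | x)
    # computed by a classifier trained under train_priors.
    train_priors = np.asarray(train_priors, dtype=float)
    priors = train_priors.copy()
    for _ in range(n_iter):
        # E-step: adjust posteriors to reflect the current prior estimate.
        adjusted = posteriors * (priors / train_priors)
        adjusted /= adjusted.sum(axis=1, keepdims=True)
        # M-step: the new prior is the average adjusted posterior.
        priors = adjusted.mean(axis=0)
    return priors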